
    A Fast Algorithm for Sparse Controller Design

    Full text link
    We consider the task of designing sparse control laws for large-scale systems by directly minimizing an infinite horizon quadratic cost with an ℓ1 penalty on the feedback controller gains. Our focus is on an improved algorithm that allows us to scale to large systems (i.e., those where sparsity is most useful) with convergence times that are several orders of magnitude faster than existing algorithms. In particular, we develop an efficient proximal Newton method which minimizes per-iteration cost with a coordinate descent active-set approach and fast numerical solutions to the Lyapunov equations. Experimentally, we demonstrate the appeal of this approach on synthetic examples and real power networks significantly larger than those previously considered in the literature.
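
    One standard way to pose this kind of problem, sketched here for illustration (the dynamics, the weights Q and R, and the penalty λ are assumptions, not notation taken from the paper), is as an ℓ1-penalized quadratic (LQR/H2) design over a static feedback gain K, with the cost evaluated through a Lyapunov equation:

    ```latex
    % Illustrative formulation of sparse state-feedback design
    % (symbols are assumptions, not taken from the paper).
    \begin{align*}
      \min_{K}\quad & J(K) \;+\; \lambda \sum_{i,j} |K_{ij}|, \\
      \text{where}\quad & J(K) = \operatorname{tr}\!\big((Q + K^\top R K)\, P(K)\big), \\
      & (A + BK)\, P(K) + P(K)\,(A + BK)^\top + I = 0 .
    \end{align*}
    ```

    In a formulation like this, every evaluation of J(K) or its gradient reduces to solving Lyapunov equations, which is why fast Lyapunov solvers and an active set restricted to the nonzero entries of K keep the per-iteration cost low.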

    Optimizing Optimization: Scalable Convex Programming with Proximal Operators

    No full text
    Convex optimization has developed a wide variety of useful tools critical to many applications in machine learning. However, unlike linear and quadratic programming, general convex solvers have not yet reached sufficient maturity to fully decouple the convex programming model from the numerical algorithms required for implementation. Especially as datasets grow in size, there is a significant gap in speed and scalability between general solvers and specialized algorithms. This thesis addresses this gap with a new model for convex programming based on an intermediate representation of convex problems as a sum of functions with efficient proximal operators. This representation serves two purposes: 1) many problems can be expressed in terms of functions with simple proximal operators, and 2) the proximal operator form serves as a general interface to any specialized algorithm that can incorporate additional ℓ2-regularization. On a single CPU core, numerical results demonstrate that the prox-affine form results in significantly faster algorithms than existing general solvers based on conic forms. In addition, splitting problems into separable sums is attractive from the perspective of distributing solver work amongst multiple cores and machines. We apply large-scale convex programming to several problems arising from building the next-generation, information-enabled electrical grid. In these problems (as is common in many domains), large, high-dimensional datasets present opportunities for novel data-driven solutions. We present approaches based on convex models for several problems: probabilistic forecasting of electricity generation and demand, preventing failures in microgrids, and source separation for whole-home energy disaggregation.
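
    As a small illustration of this representation (not the thesis's solver; the problem, names, and parameters below are assumptions), consensus ADMM can solve a problem written as a sum of two terms, each handled only through its proximal operator:

    ```python
    # Minimal sketch: consensus ADMM on a problem expressed as a sum of
    # prox-friendly terms, (1/2)||A x - b||^2 + lam * ||x||_1.
    # Illustrative only; not the solver described in the thesis.
    import numpy as np

    def prox_quadratic(v, rho, A, b):
        # prox of (1/rho) * (1/2)||A x - b||^2 at v
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + rho * np.eye(n), A.T @ b + rho * v)

    def prox_l1(v, rho, lam):
        # prox of (1/rho) * lam * ||x||_1 at v: elementwise soft-thresholding
        return np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)

    def consensus_admm(A, b, lam, rho=1.0, iters=300):
        n = A.shape[1]
        z = np.zeros(n)
        u = [np.zeros(n), np.zeros(n)]                # one dual variable per term
        for _ in range(iters):
            x0 = prox_quadratic(z - u[0], rho, A, b)  # data-fit term
            x1 = prox_l1(z - u[1], rho, lam)          # l1 penalty term
            z = ((x0 + u[0]) + (x1 + u[1])) / 2.0     # consensus (averaging) step
            u[0] += x0 - z
            u[1] += x1 - z
        return z

    # Example usage on a small synthetic problem.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    x_true = np.zeros(20); x_true[:3] = [1.0, -2.0, 0.5]
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    x_hat = consensus_admm(A, b, lam=0.1)
    ```

    Splitting the objective into prox-friendly pieces in this way is also what makes it natural to distribute the per-term prox evaluations across cores and machines, as described above.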

    Large-scale Probabilistic Forecasting in Energy Systems using Sparse Gaussian Conditional Random Fields

    No full text
    Short-term forecasting is a ubiquitous practice in a wide range of energy systems, including forecasting demand, renewable generation, and electricity pricing. Although it is known that probabilistic forecasts (which give a distribution over possible future outcomes) can improve planning and control, many forecasting systems in practice are just used as “point forecast” tools, as it is challenging to represent high-dimensional non-Gaussian distributions over multiple spatial and temporal points. In this paper, we apply a recently proposed algorithm for modeling high-dimensional conditional Gaussian distributions to forecasting wind power and extend it to the non-Gaussian case using the copula transform. On a wind power forecasting task, we show that this probabilistic model greatly outperforms other methods on the task of accurately modeling potential distributions of power (as would be necessary in a stochastic dispatch problem, for example).
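
    A minimal sketch of the copula idea used here (the rank-based CDF estimate and the names below are assumptions, not the paper's exact procedure): each output is pushed through its empirical CDF and the normal quantile (probit) function, a Gaussian model is fit in the transformed space, and samples are mapped back by matching quantiles of the training data.

    ```python
    # Sketch of a Gaussian copula transform (illustrative assumptions only).
    import numpy as np
    from scipy.stats import norm, rankdata

    def to_gaussian(X):
        """Map each column to roughly standard-normal values via its
        empirical CDF followed by the normal quantile function."""
        n = X.shape[0]
        U = rankdata(X, axis=0) / (n + 1.0)   # empirical CDF values in (0, 1)
        return norm.ppf(U)

    def from_gaussian(Z, X_ref):
        """Map Gaussian-space values back to the data scale by matching
        quantiles of the reference (training) columns."""
        U = norm.cdf(Z)
        out = np.empty_like(Z, dtype=float)
        for j in range(Z.shape[1]):
            out[:, j] = np.quantile(X_ref[:, j], U[:, j])
        return out
    ```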

    Sparse Gaussian Conditional Random Fields: Algorithms, Theory, and Application to Energy Forecasting

    No full text
    This paper considers the sparse Gaussian conditional random field, a discriminative extension of sparse inverse covariance estimation, where we use convex methods to learn a high-dimensional conditional distribution of outputs given inputs. The model has been proposed by multiple researchers within the past year, yet previous papers have been substantially limited in their analysis of the method and in the ability to solve large-scale problems. In this paper, we make three contributions: 1) we develop a second-order active-set method which is several orders of magnitude faster than previously proposed optimization approaches for this problem; 2) we analyze the model from a theoretical standpoint, improving upon past bounds with convergence rates that depend logarithmically on the data dimension; and 3) we apply the method to large-scale energy forecasting problems, demonstrating state-of-the-art performance on two real-world tasks.
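
    For reference, one common parameterization of the sparse Gaussian conditional random field looks roughly as follows (the notation is an illustrative assumption, not necessarily that of the paper): the outputs are conditionally Gaussian with precision matrix Λ and input-output coupling Θ, both estimated by ℓ1-penalized maximum likelihood.

    ```latex
    % Illustrative parameterization of a sparse Gaussian CRF
    % (notation is an assumption, not taken from the paper).
    \begin{align*}
      p(y \mid x;\, \Lambda, \Theta)
        &\propto \exp\!\Big(-\tfrac{1}{2}\, y^\top \Lambda\, y \;-\; x^\top \Theta\, y\Big),
      \qquad
      y \mid x \sim \mathcal{N}\!\big(-\Lambda^{-1}\Theta^\top x,\; \Lambda^{-1}\big), \\
      \min_{\Lambda \succ 0,\, \Theta}\quad
        & -\frac{1}{n}\sum_{i=1}^{n} \log p\big(y^{(i)} \mid x^{(i)};\, \Lambda, \Theta\big)
          \;+\; \lambda\,\big(\|\Lambda\|_{1} + \|\Theta\|_{1}\big).
    \end{align*}
    ```

    Sparsity in Λ and Θ is what an active-set second-order method can exploit, since Newton-type updates can be restricted to the small set of entries that are currently nonzero or violate the optimality conditions.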